For the past few years, Google has been written off as the “new IBM”: an organization that invented cutting-edge technology but couldn’t use it as well as its competitors. Google researchers are arguably responsible for modern AI progress; after all, they invented transformers, the “T” in ChatGPT and the breakthrough neural network architecture that most modern AI models are built atop. Google also develops TensorFlow, one of the most popular machine learning libraries, and builds their own chips, called Tensor Processing Units, designed specifically for AI training. Despite these innovations, their flagship AI model, Gemini, has lagged behind those built by smaller rivals like OpenAI and Anthropic for the past few years.
Until last week. On November 18th, Google released their Gemini 3 model to rave reviews. It has risen to the top of the leaderboard across several benchmarks designed to rate LLM performance, resoundingly beating OpenAI’s previous high score of 31.64 on Humanity’s Last Exam with a score of 37.4. And on LMArena, a platform where users blindly interact with several chatbots and then vote for the best one, Google tops the charts in most categories.
So how did Google’s Gemini become so successful? One reason is the aforementioned Tensor Processing Units: Gemini 3 was trained entirely on Google’s proprietary chips. TPUs are extremely energy efficient, using significantly less power per computation than traditional GPUs and therefore costing less to run. By training Gemini 3 entirely on TPUs, Google has shown that it can sidestep the “Nvidia tax” — the premium that every other AI company must pay for access to the scarce, expensive Nvidia GPUs that are considered the industry standard. Some experts estimate that Nvidia marks up each chip they sell by as much as 823%, meaning that Google has significantly reduced their AI spending via TPU development. They’ve also begun selling TPUs to Anthropic, meaning their hardware could become a source of revenue.
Aside from their proprietary hardware, Google’s ability to distribute their AI products is unmatched. They recently published a list of 1,000 companies that use their AI products for vastly different tasks — from creating an adaptable “World’s Smartest Billboard” ad campaign to drafting legal contracts. Google also owns some of the world’s most-used websites and apps, like YouTube, Gmail, and Google Maps, as well as the largest smartphone operating system, Android. This will allow them to easily distribute Gemini to millions of users without striking external deals. Google’s ownership of these large sites keeps their revenue high, meaning they can charge less for AI use than their smaller competitors OpenAI and Anthropic, who are already running their models at a loss. All of this suggests that Google will become the dominant name in AI within the next decade.
Google’s domestic rivals are also struggling. OpenAI has committed to a series of blockbuster deals, promising to spend $1.4 trillion on data centers over roughly the next decade, yet reported a $12 billion loss last quarter. According to unverified figures from industry analyst Ed Zitron, OpenAI’s quarterly inference costs — the computational cost of responding to user queries — consistently exceed their quarterly revenues, raising questions about their path to profitability.
Public confidence in OpenAI worsened after their CFO, Sarah Friar, appeared to call for government backstops for their data center spending at a Wall Street Journal-hosted event (though she later walked back her comments in a LinkedIn post). A publicly available letter from Christopher Lehane, OpenAI’s Chief Global Affairs Officer, to White House Science & Technology Advisor Michael Kratsios explicitly requested federal assistance in paying for AI infrastructure, asking the government to “deploy grants, cost-sharing agreements, loans, or loan guarantees to expand industrial base capacity and resilience.” It appears as though OpenAI is squirming, asking the government to step in and guarantee their risky deals with taxpayer dollars.
Meta’s AI spending is causing concern among their investors. On their October 29th earnings call, CEO Mark Zuckerberg laid out plans to increase their capital expenditures from $66bn to $72bn, citing spending on data centers and chips. Afterwards, their stock fell off a cliff, dropping about 20% as of November 23rd. Meta is also facing retention issues. This month, their Chief Scientist, Yann LeCun, announced that he is leaving to found his own startup. Despite offering top AI researchers higher salaries than some NFL quarterbacks, Meta has seen many swift departures — including former OpenAI researchers who threatened to quit within weeks of their hiring dates.
Google’s last large domestic competitor, xAI, does not strike me as a serious company, primarily because of their leadership. They are pursuing funding from investors that would value xAI at $230bn, but their growth has been marred by a series of incidents. In July 2025, their main chatbot, Grok — which has been given the ability to interact with users on X (formerly Twitter) — went on a racist, antisemitic tirade, including threats and sexual harassment directed at Linda Yaccarino, X’s then-CEO; Yaccarino resigned the following day. And for a few days in May 2025, Grok ranted about “white genocide” in South Africa while answering completely unrelated questions. Both incidents suggest that xAI’s culture is more concerned with Elon Musk’s agenda of eyeroll-worthy edgy humor and race-science rhetoric than with changing the world for the better. Hopefully, accomplished AI researchers will steer clear, despite xAI’s eye-popping valuation.
While the AI landscape is volatile — a new breakthrough from any company could reorder it entirely — I believe that Google will become the leading AI company in the United States, thanks to their vertical integration, diverse revenue streams, and the struggles of their competitors.